miχPODS: New baseline#

TODO#

  • why does the stability diagram load multiple times

  • Is there an error in ONI phase labeling for the newer runs? Presumably SST changed somewhat.

  • Add diagnostic for convective “adjustment” or diff == 1

Introduction#

Analysis of χpod shear mixing measurements recorded over approximately 15 years has revealed that mixing varies on multiple timescales, from hourly to interannual, in the tropical Pacific and Indian Oceans (Moum, 2021; Cherian et al., 2020b; Warner et al., 2016; Warner and Moum, 2019; Pujiana et al., 2018; Moum et al., 2009, 2013). These results show that equatorial mixing is organized in large-scale coherent patterns.

We ask whether models reproduce these patterns. We use a decade-long observed mixing dataset (from χpods) to develop mixing process-oriented diagnostics (miχPODs) that assess the variability of parameterized equatorial mixing from monthly to decadal timescales, and its accumulated impact on model biases, by applying miχPODs to a suite of existing CMIP6-class forced ocean and coupled model experiments from NCAR and GFDL.

Following Large and Gent (1999), the comparison of a 1D mixing model with LES or DNS is clean, because the forcing is the same, and direct, because the evaluation compares turbulence quantities (despite shortcomings such as model errors and idealized forcing). In contrast, the performance of a mixing scheme in global and regional model simulations is tested by comparing the simulated mean state to an observed mean state after a long integration (e.g., Blanke and Delecluse, 1993; Robitaille and Weaver, 1995; Large and Gent, 1999; Gutjahr et al., 2020). This comparison is not clean, because of errors in forcing fields, nor is it direct, since mean-state properties are not a direct output of the mixing scheme.

We propose diagnostics that evaluate the representation of the variability in parameterized equatorial mixing using the multiyear χpod turbulence dataset. While this comparison is not clean, it is direct, a substantial improvement over the current state-of-the-art approach.

These diagnostics are direct evaluations of the models’ ability to simulate the observed variability of ocean vertical mixing and reproduce large-scale patterns recorded in equatorial mixing observations.

Parameters#

  • TODO write down KPP and Jochum equations

  • TODO something about total diffusivity field in model. Total diffusivity \(K = K_D + K_{KPP}\). and mapping from physical process to parameterization.
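As a stopgap for the TODO above, the LMD94 interior shear-instability viscosity, which the \(Ri_0\) and \(ν_0\) perturbations below modify, can be sketched as follows (constants are the Large et al., 1994 defaults; the exact MOM6/CVMix implementation may differ):

```python
import numpy as np


def lmd94_shear_viscosity(Rig, nu0=5e-3, Ri0=0.7, n=3):
    """Interior shear-instability viscosity of Large et al. (1994):

    nu = nu0                           for Rig <= 0
    nu = nu0 * (1 - (Rig/Ri0)**2)**n   for 0 < Rig < Ri0
    nu = 0                             for Rig >= Ri0
    """
    Rig = np.asarray(Rig, dtype=float)
    clipped = np.clip(Rig, 0.0, Ri0)
    return nu0 * (1.0 - (clipped / Ri0) ** 2) ** n


# unstable/neutral columns get the full nu0; mixing shuts off above Ri0
lmd94_shear_viscosity([-0.1, 0.0, 0.35, 0.7, 1.0])
```

Lowering \(Ri_0\) narrows the band of \(Ri_g\) over which shear mixing acts; lowering \(ν_0\) reduces its amplitude.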

We perturb the following parameters

  1. “Background mixing” parameters used to simulate mixing by unresolved processes such as breaking internal waves.

    1. A latitudinally varying background diffusivity \(K_D\) using the formulation of Jochum (2003)

    2. A latitudinally varying background viscosity \(K_V'\) derived from \(K_D\) using a Prandtl number of 5 (CHECK).

    3. A constant background viscosity \(K_V\) added to (2).

  2. KPP Boundary layer scheme:

    1. Bulk Richardson number criterion \(Ri_c\) which controls the depth of the actively mixing layer.

  3. KPP shear mixing scheme:

    1. Critical Richardson number \(Ri_0\) that controls when mixing occurs.

    2. Maximum shear mixing viscosity \(ν_0\)

Choices for the parameter changes were influenced by experiments matching 1D KPP simulations to the Large Eddy Simulations of Whitt et al (2022). The range of parameter values explored here is not exhaustive and is only intended to demonstrate the utility of these metrics in tuning and assessing model performance.
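To make the background pieces concrete, here is a minimal sketch of how items 1.1-1.3 combine, with a hypothetical step profile standing in for the Jochum formulation (the profile shape and all constants here are illustrative assumptions, not the values used in the runs):

```python
import numpy as np


def background_diffusivity(lat, kd_eq=1e-6, kd_off=1e-5, lat0=5.0):
    """Hypothetical latitudinally varying background diffusivity K_D:
    reduced within lat0 degrees of the equator (stand-in for Jochum)."""
    return np.where(np.abs(np.asarray(lat, dtype=float)) < lat0, kd_eq, kd_off)


def background_viscosity(kd, prandtl=5.0, kv_const=2e-4):
    """K_V' = Pr * K_D (item 1.2), plus a constant viscosity (item 1.3)."""
    return prandtl * kd + kv_const


kd = background_diffusivity([0.0, 10.0])
kv = background_viscosity(kd)
```

The total diffusivity seen by the model would then be this background plus the KPP contribution, \(K = K_D + K_{KPP}\).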

Datasets#

Observations#

  1. TAO \(T, S, u, v\) and turbulence (Moum et al XXX)

  • Note which depth χpods are being used

  • Add a figure showing sampling and gaps

Model Simulations#

The effect of these parameter changes is examined in an IPCC-class climate model: the development version of the Community Earth System Model version 3 (CESM3), using the Modular Ocean Model version 6 (MOM6) ocean component and the K-Profile Parameterization (KPP) vertical mixing scheme (Large et al., 1994).

For all simulations, we save a large number of variables every model time step (one hour).

  1. “old” baseline with KD=1e-5, KV=2e-4 (NCAR/MOM6#238)

  2. old baseline + kpp.lmd.004 with KPP ν0=2.5, Ric=0.2, Ri0=0.5

  3. new baseline : KD=0, KV=0

  4. new baseline : kpp.lmd.004 with KPP ν0=2.5, Ric=0.2, Ri0=0.5

  5. new baseline : kpp.lmd.005 with KPP ν0=2.5, Ric=0.3, Ri0=0.5

Summary#

  1. Turning down the background viscosity by a factor of 4 (2e-4 → 5e-5 m2/s)

    1. sharpens the EUC relative to TAO

    2. changes the stability diagram for the El-Nino warming phase.

  2. Using Ri_c=0.2, so a shallower KPP surface layer, is key, as Dan mentioned.

Setup#

%load_ext autoreload
%load_ext rich
%load_ext watermark

%xmode minimal

import math
import warnings
import os

import cf_xarray as cfxr
import dcpy.datatree  # noqa
import distributed
import holoviews as hv
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tqdm
from datatree import DataTree

import xarray as xr

%aimport pump
from holoviews.plotting.bokeh.util import select_legends

from pump import mixpods

hv.notebook_extension("bokeh")

cfxr.set_options(warn_on_missing_variables=False)
xr.set_options(keep_attrs=True, display_expand_data=False)

plt.style.use("bmh")
plt.rcParams["figure.dpi"] = 140

# https://github.com/dask/distributed/issues/8022
import warnings
import bokeh

# warnings.simplefilter("ignore", bokeh.util.warnings.BokehUserWarning)
import datatree


%watermark -iv
The autoreload extension is already loaded. To reload it, use:
  %reload_ext autoreload
Exception reporting mode: Minimal
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/dask_jobqueue/core.py:20: FutureWarning: tmpfile is deprecated and will be removed in a future release. Please use dask.utils.tmpfile instead.
  from distributed.utils import tmpfile
holoviews  : 1.17.0
distributed: 2023.8.0
dcpy       : 0.1.dev387+gd06c937
xarray     : 2023.7.0
pandas     : 2.0.3
numpy      : 1.24.4
bokeh      : 3.2.2
datatree   : 0.0.12
cf_xarray  : 0.8.4
tqdm       : 4.66.1
pump       : 1.0+273.g892024e.dirty
matplotlib : 3.7.1
if "client" in locals():
    client.close()  # noqa
    del client  # noqa
if "cluster" in locals():
    cluster.close()  # noqa

import ncar_jobqueue

cluster = ncar_jobqueue.NCARCluster(
    local_directory="/local_scratch/pbs.$PBS_JOBID/dask/spill",
)
cluster.adapt(minimum_jobs=2, maximum_jobs=8)
client = distributed.Client(cluster)
client

Client

Client-500a231d-3b92-11ee-a7a2-3cecef1b1260

Connection method: Cluster object Cluster type: dask_jobqueue.PBSCluster
Dashboard: https://jupyterhub.hpc.ucar.edu/stable/user/dcherian/casper/proxy/8787/status

Cluster Info

Read#

MOM6#

Look at catalog#

The catalog only contains surface variables, full depth variables, NO SECTIONS YET.

catalog = pump.catalog.open_mom6_catalog()
catalog

pump-mom6-catalog catalog with 41 dataset(s) from 41 asset(s):

unique
casename 11
stream 5
path 41
baseline 2
levels 2
frequency 3
variables 85
shortname 11
description 9
derived_variables 0
catalog.df["shortname"].unique()

array(['baseline', 'kpp.lmd.004', 'baseline.001', 'baseline.epbl.001',
       'baseline.hb', 'baseline.kpp.lmd.002', 'baseline.kpp.lmd.003',
       'baseline.kpp.lmd.004', 'new_baseline.hb',
       'new_baseline.kpp.lmd.004', 'new_baseline.kpp.lmd.005'],
      dtype=object)

Subset catalog#

Note

Use baseline.hb instead of baseline.001. It was run by Anna-Lena, has no gaps, and includes heat budget output.

catalog_sub = pd.concat(
    [
        cat.df
        for cat in [
            # pick two "old" baseline simulations
            catalog.search(
                stream="combined",
                levels=65,
                shortname=["baseline.hb", "baseline.kpp.lmd.004"],
            ),
            # and all new baseline simulations
            catalog.search(stream="combined", levels=65, baseline="new"),
        ]
    ]
)

with pd.option_context("display.max_colwidth", None):
    display(catalog_sub[["shortname", "casename", "path"]])

shortname casename path
0 baseline.hb gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.hb /glade/campaign/cgd/oce/projects/pump/cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.hb/run/jsons/combined.json
1 baseline.kpp.lmd.004 gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.004.mixpods /glade/campaign/cgd/oce/projects/pump/cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.004.mixpods/run/jsons/combined.json
0 new_baseline.hb gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.hb /glade/campaign/cgd/oce/projects/pump/cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.hb/run/jsons/combined.json
1 new_baseline.kpp.lmd.004 gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.004.mixpods /glade/campaign/cgd/oce/projects/pump/cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.004.mixpods/run/jsons/combined.json
2 new_baseline.kpp.lmd.005 gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.005.mixpods /glade/campaign/cgd/oce/projects/pump/cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.005.mixpods/run/jsons/combined.json
backup#

This is what I was doing before switching to catalog.search

catalog_sub = {
    k: catalog[k]
    for k in catalog.keys()
    if k == "kpp.lmd.004" or ("baseline" in k and "150" not in k)
}
display(catalog_sub)
{
    'baseline': ('baseline', 'gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.001.mixpods'),
    'kpp.lmd.004': (
        'KPP ν0=2.5, Ric=0.2, Ri0=0.5',
        'gmom.e23.GJRAv3.TL319_t061_zstar_N65.baseline.kpp.lmd.004.mixpods'
    ),
    'new_baseline.hb': ('KD=0, KV=0', 'gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.hb'),
    'new_baseline.kpp.lmd.004': (
        'KPP ν0=2.5, Ric=0.2, Ri0=0.5',
        'gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.004.mixpods'
    ),
    'new_baseline.kpp.lmd.005': (
        'KPP ν0=2.5, Ri0=0.5',
        'gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.005.mixpods'
    )
}

Since there are no sections in the catalog, and we do a bunch of custom things, loop over entries and load the section output.

%autoreload

datasets = {}
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=UserWarning)
    warnings.simplefilter("ignore", category=FutureWarning)
    for _, (casename, shortname, description) in tqdm.tqdm(
        catalog_sub[["casename", "shortname", "description"]].iterrows()
    ):
        datasets.update(
            {
                shortname: mixpods.load_mom6_sections(casename).assign_attrs(
                    {"title": description}
                )
            }
        )
0it [00:00, ?it/s]
using new oni
1it [00:02,  2.34s/it]
using new oni
2it [00:04,  2.15s/it]
using new oni
3it [00:06,  2.09s/it]
using new oni
4it [00:08,  1.97s/it]
using new oni
5it [00:09,  1.99s/it]
tree = DataTree.from_dict(datasets)
tree

<xarray.DatasetView>
Dimensions:  ()
Data variables:
    *empty*

Reprocess 004, 005 TO DELETE#

from pump import mixpods
from mom6_tools.kerchunk import (
    combine_stream_jsons_as_groups,
    generate_references_for_stream,
)

for casename in [
    "gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.004.mixpods",
    "gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.005.mixpods",
]:
    caseroot = f"{mixpods.ROOT}/cesm/{casename}"
    print(caseroot)

    for stream in ["h", "hm", "hm.wci", "sfc"]:
        generate_references_for_stream(
            caseroot=caseroot,
            stream=stream,
            missing_stream="warn",
            existing_output="overwrite",
        )

    combine_stream_jsons_as_groups(caseroot=caseroot)

import pump

pump.catalog.build_mom6_catalog()
/glade/campaign/cgd/oce/projects/pump//cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.004.mixpods
/glade/u/home/dcherian/python/mom6-tools/mom6_tools/kerchunk.py:104: RuntimeWarning: No files found for caseroot: /glade/campaign/cgd/oce/projects/pump//cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.004.mixpods, stream: hm.wci
  warnings.warn(f"No files found for caseroot: {caseroot}, stream: {stream}", RuntimeWarning)
/glade/campaign/cgd/oce/projects/pump//cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.005.mixpods
/glade/u/home/dcherian/python/mom6-tools/mom6_tools/kerchunk.py:104: RuntimeWarning: No files found for caseroot: /glade/campaign/cgd/oce/projects/pump//cesm/gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.005.mixpods, stream: hm.wci
  warnings.warn(f"No files found for caseroot: {caseroot}, stream: {stream}", RuntimeWarning)
Successfully wrote ESM catalog json file to: file:///glade/campaign/cgd/oce/projects/pump/catalogs/pump-mom6-catalog.json
for casename in [
    "gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.004.mixpods",
    "gmom.e23.GJRAv3.TL319_t061_zstar_N65.new_baseline.kpp.lmd.005.mixpods",
]:
    mixpods.mom6_sections_to_zarr(casename)
100%|██████████| 33/33 [00:18<00:00,  1.75it/s]
100%|██████████| 33/33 [00:19<00:00,  1.68it/s]

Add TAO#

tao_gridded = mixpods.load_tao()
tree["TAO"] = DataTree(tao_gridded)
tree = tree.dc.reorder_nodes(["TAO", ...])
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/dataset.py:265: UserWarning: The specified chunks separate the stored chunks along dimension "depth" starting at index 42. This could degrade performance. Instead, consider rechunking after loading.
  warnings.warn(
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/dataset.py:265: UserWarning: The specified chunks separate the stored chunks along dimension "time" starting at index 199726. This could degrade performance. Instead, consider rechunking after loading.
  warnings.warn(
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/dataset.py:265: UserWarning: The specified chunks separate the stored chunks along dimension "longitude" starting at index 2. This could degrade performance. Instead, consider rechunking after loading.
  warnings.warn(

Add LES, microstructure#

Check SST time series#

sst = (
    xr.Dataset(
        {
            casename: node["sst"].reset_coords(drop=True)
            for casename, node in tree.children.items()
        }
    )
    .to_array("case")
    .load()
)
sst.rename("sst").resample(time="D").mean().hvplot.line(muted_alpha=0, by="case")

Post-process catalog subset#

Slice#

tree = tree.sel(time=slice("2003", "2017"))
tree

<xarray.DatasetView>
Dimensions:  ()
Data variables:
    *empty*

N2-S2 histograms#

ref = tree["TAO"].ds.reset_coords(drop=True).cf.sel(Z=slice(-120, None))
counts = np.minimum(ref["S2"].cf.count("Z"), ref["N2T"].cf.count("Z")).load()


def calc_histograms(ds):
    ds = ds.copy()
    ds["tao_mask"] = counts.reindex(time=ds.time, method="nearest") > 5
    ds["tao_mask"].attrs = {
        "description": "True when more than five 5-m-spaced T, u, v observations exist in the TAO dataset"
    }
    # I did try masking out the model like the data
    # ds = ds.where(ds.tao_mask)
    return ds.update(mixpods.pdf_N2S2(ds))


tree = tree.map_over_subtree(calc_histograms)
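For reference, the joint \(N^2\)-\(S^2\) histogram that `mixpods.pdf_N2S2` presumably computes can be sketched standalone with `numpy.histogram2d` (the bin range here is an arbitrary choice; the real function may differ):

```python
import numpy as np


def pdf_n2s2_sketch(N2, S2, bins=np.linspace(-5.5, -2.5, 31)):
    """Joint PDF of log10(4 N^2) vs log10(S^2). Points on the diagonal
    satisfy S^2 = 4 N^2, i.e. Ri_g = N^2 / S^2 = 1/4."""
    N2, S2 = np.ravel(N2), np.ravel(S2)
    good = (N2 > 0) & (S2 > 0) & np.isfinite(N2) & np.isfinite(S2)
    counts, xedges, yedges = np.histogram2d(
        np.log10(4 * N2[good]), np.log10(S2[good]), bins=[bins, bins], density=True
    )
    return counts, xedges, yedges


# synthetic test: stratification with shear scattered about Ri_g = 0.25
rng = np.random.default_rng(0)
N2 = 10 ** rng.normal(-4, 0.3, 10_000)
S2 = 4 * N2 * 10 ** rng.normal(0, 0.2, 10_000)
counts, xedges, yedges = pdf_n2s2_sketch(N2, S2)
```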

daily composite#

dailies = tree.map_over_subtree(mixpods.daily_composites)
dailies
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/numpy/lib/nanfunctions.py:1217: RuntimeWarning: All-NaN slice encountered
  r, k = function_base._ureduce(a, func=_nanmedian, axis=axis, out=out,
<xarray.DatasetView>
Dimensions:  ()
Data variables:
    *empty*

euc relative frame#

# could be cleaned up
newtree = mixpods.bin_to_euc_centered_coordinate(tree)
# merge tree
for nodename, _ in tree.children.items():
    tree[f"{nodename}/euc"] = newtree[f"{nodename}/euc"]

euc_mean = mixpods.average_euc(newtree)
for nodename, _ in tree.children.items():
    tree[f"{nodename}/euc/mean"] = euc_mean[f"{nodename}"]

euc_subset = DataTree()
for nodename, _ in tree.children.items():
    euc_subset[f"{nodename}"] = tree[f"{nodename}/euc"]

euc_subset.to_zarr("mom6-euc-coordinate.zarr")
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
<__array_function__ internals>:200: RuntimeWarning: invalid value encountered in cast
euc = datatree.open_datatree("mom6-euc-coordinate.zarr", engine="zarr")
euc_mean = euc.dc.extract_leaf("mean")
tree = tree.dc.insert_as_subtree("euc", euc)
Merge in microstructure#
micro_zeuc = datatree.open_datatree(
    os.path.expanduser("~/datasets/microstructure/equix-tiwe-zeuc.average.nc")
).load()
for nodename, node in micro_zeuc.children.items():
    node["u"].attrs["standard_name"] = "sea_water_x_velocity"
    node["v"].attrs["standard_name"] = "sea_water_y_velocity"
    node["KT"].attrs["standard_name"] = "ocean_vertical_heat_diffusivity"
    node["ν"].attrs["standard_name"] = "ocean_vertical_momentum_diffusivity"

    del node["Rig_T"].attrs["standard_name"]
    del node["N2T"].attrs["standard_name"]
    # euc_mean[nodename] = node

for nodename, node in micro_zeuc.children.items():
    euc_mean[nodename] = node

Validate before continuing#

mixpods.validate_tree(tree)

Persist tree#

tree = mixpods.persist_tree(tree)
euc_mean.load()
<xarray.DatasetView>
Dimensions:  ()
Data variables:
    *empty*

Timescales of comparison#

Comparing a coarse climate model, with 2/3° lateral grid spacing and a one-hour time step, to microstructure observations that capture temperature variability at 7-15 Hz is challenging.

The equator is a special place with strong diurnal variability in turbulence.

Capturing the daily timescale at the equator has been a vital benchmark for the KPP scheme (Large and Gent, 1999).

Interestingly, we find that all simulations here reproduce qualitatively similar variability at the daily frequency, even though variance is depressed at sub-daily and super-daily frequencies.

TODO: Frequency spectra comparison#
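A bare-bones version of the planned comparison, on a synthetic hourly series (a real comparison would use variables from `tree`):

```python
import numpy as np


def frequency_spectrum(x, dt_hours=1.0):
    """Periodogram of an hourly series; frequencies in cycles per day."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(x.size, d=dt_hours / 24.0)  # cpd
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    return freqs, power


# one year of hourly samples: a diurnal cycle plus noise
t = np.arange(24 * 365)
x = np.sin(2 * np.pi * t / 24) + 0.3 * np.random.default_rng(0).normal(size=t.size)
freqs, power = frequency_spectrum(x)
# the dominant peak (ignoring the mean) should sit near 1 cycle per day
```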

Mean#

Mean State Variable Profiles#

  1. A lot more shear, \(S, S^2\) just above the EUC when visc is turned down!

  2. We are very slightly lower on \(S^2\), \(N_T^2\) in the top 80m, compare TAO, kpp.lmd.004, new_baseline.kpp.lmd.004.

  3. Using the standard Ri_c=0.3, so deeper KPP surface layer, decreases \(S^2\), \(N_T^2\) in the top 60m. new_baseline.kpp.lmd.004 vs new_baseline.kpp.lmd.005

for context, Peters et al (1995) is interesting:

  1. Variability patterns at 50-350 m are distinctly different from the upper 50 m containing the diurnal cycle of mixing

  2. Large-scale shear of wavenumbers k < 0.01 cpm is associated with the slowly varying EUC and EIC. Large-scale is the dominant component of total shear above 50 m and in the thermostad, 170-270 m.

  3. Fine-scale shear, 0.01 cpm < k < 0.5 cpm, provides the dominant component of total rms shear and exceeds the large-scale component in thick layers around the cores of the EUC and EIC, where Fr < 1, at 50-170 m and below 270 m.

  4. Variations of fine-scale shear cause variations in turbulent mixing; the large-scale shear alone is a poor predictor of mixing.

  5. Lacking a model that links large-scale, fine-scale, and turbulent flow components, our service to equatorial modelers consists of describing general levels of eddy diffusivities and their variability patterns.

These models cannot resolve “fine-scale” shear, and mixing is not well correlated with “large-scale” shear, so we need a “background” diffusivity/viscosity to make things work.

But see that std(\(S^2\)) is a lot stronger above the EUC core, relative to TAO for new_baseline.kpp.lmd.004 and new_baseline.kpp.lmd.005

S2 = mixpods.plot_profile_fill(tree, "S2", "S^2")
N2 = mixpods.plot_profile_fill(tree, "N2T", "N_T^2")
u = mixpods.plot_profile_fill(tree, "sea_water_x_velocity", "u")
T = mixpods.plot_profile_fill(tree, "sea_water_potential_temperature", "T")
Ri = mixpods.plot_median_Ri(tree)
%autoreload

plot = (
    (S2 + N2 + Ri + u + T)
    .cols(5)
    .opts(
        hv.opts.Curve(xlim=(-350, 0), frame_height=600),
        hv.opts.Layout(toolbar="left"),
        hv.opts.Overlay(legend_offset=(0, 200)),
        clone=True,
    )
)
select_legends(plot, figure_index=4, legend_position="right")
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/holoviews/plotting/bokeh/plot.py:967: UserWarning: found multiple competing values for 'toolbar.active_drag' property; using the latest value
  layout_plot = gridplot(

A deeper look at mixing below the EUC#

We find that tuning KPP’s shear scheme to better represent deep-cycle turbulence, by reducing \(Ri_0\) where median \(Ri_g\) is lower (0.25), prevents it from activating in the lower half of the EUC, where median \(Ri_g\) is higher (1).

Microstructure observations from short-term experiments (TIWE, EQUIX) do show higher viscosities \(ν \sim 10^{-4}\ \mathrm{m^2\,s^{-1}}\) below the EUC maximum.

We retain only TAO χpods above 90 m depth, so there is very little data below the EUC core. For these data, bins with fewer than 2000 hourly measurements are excluded.

With reduced \(Ri_0 \sim 0.5\), the KPP scheme is unable to reproduce these viscosities, resulting in a sharper EUC.

Recall from Peters et al. (1995) that fine-scale shear (0.01 cpm < k < 0.5 cpm) dominates the total rms shear around the cores of the EUC and EIC, and that large-scale shear alone is a poor predictor of mixing. These simulations cannot resolve fine-scale shear, so a “background” diffusivity/viscosity is needed.

Indeed we see that additional background viscosity is necessary — compare new_baseline.hb to new_baseline.kpp.lmd.004.

tao_chi_counts = euc["TAO/chi"].count("time").load()
tao_chi_counts.hvplot.line(title="TAO χ number of hourly obs in bin")

euc_mean["TAO"] = euc_mean["TAO"].where(tao_chi_counts > 2000)

h = {
    varname: mixpods.map_hvplot(
        lambda node, name, muted: (
            node.ds.cf[varname]
            .reset_coords(drop=True)
            .hvplot.line(
                ylabel=varname,
                label=name,
                logx=varname != "sea_water_x_velocity",
                invert=True,
            )
        ),
        euc_mean,
    )
    for varname in [
        "sea_water_x_velocity",
        "sea_water_potential_temperature",
        "chi",
        "eps",
        "ocean_vertical_heat_diffusivity",
        "ocean_vertical_momentum_diffusivity",
    ]
}
h["chi"].opts(ylim=(1e-9, 1e-4))
h["eps"].opts(ylim=(1e-9, 1e-4))
h["ocean_vertical_heat_diffusivity"].opts(ylim=(5e-7, 3))
h["ocean_vertical_momentum_diffusivity"].opts(ylim=(1e-6, 1e-1))

h2 = {
    varname: mixpods.map_hvplot(
        lambda node, name, muted: node.ds[varname]
        .reset_coords(drop=True)
        .hvplot.line(ylabel=varname, label=name, logx=False, invert=True),
        euc_mean,
    )
    for varname in ["Rig_T"]
}

h

plot = (
    hv.Layout(list(h.values()) + [h2["Rig_T"].opts(ylim=(0, 3))])
    .opts(
        hv.opts.Curve(frame_width=150, frame_height=300, xlim=(-150, 150)),
        hv.opts.Layout(shared_axes=True),
        hv.opts.Overlay(show_legend=True, show_grid=True, legend_position="right"),
        *mixpods.HV_TOOLS_OPTIONS,
        *mixpods.PRESENTATION_OPTS,
    )
    .cols(4)
)
select_legends(plot, figure_index=3, legend_position="right")
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/holoviews/plotting/bokeh/plot.py:967: UserWarning: found multiple competing values for 'toolbar.active_drag' property; using the latest value
  layout_plot = gridplot(

Mean Turbulence Variable Profiles#

mixpods.map_hvplot(
    lambda ds, name, muted: ds.ds.cf["ocean_vertical_heat_diffusivity"]
    .mean("time")
    .hvplot.line(label=name, muted=muted),
    tree,
).opts(
    ylim=(1e-8, 1e1),
    legend_position="right",
    logx=True,
    invert_axes=True,
    frame_width=300,
    aspect=1 / 3,
)
WARNING:param.OverlayPlot211169: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.OverlayPlot211169: aspect value was ignored because absolute width and height values were provided. Either supply explicit frame_width and frame_height to achieve desired aspect OR supply a combination of width or height and an aspect value.
WARNING:param.OverlayPlot211290: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.OverlayPlot211290: aspect value was ignored because absolute width and height values were provided. Either supply explicit frame_width and frame_height to achieve desired aspect OR supply a combination of width or height and an aspect value.
mixpods.map_hvplot(
    lambda ds, name, muted: ds["eps"]
    .mean("time")
    .load()
    .hvplot.line(label=name, muted=muted),
    tree,
).opts(
    ylim=(1e-10, None),
    legend_position="right",
    logx=True,
    invert_axes=True,
    frame_width=300,
    aspect=1 / 3,
)
WARNING:param.OverlayPlot212317: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.OverlayPlot212317: aspect value was ignored because absolute width and height values were provided. Either supply explicit frame_width and frame_height to achieve desired aspect OR supply a combination of width or height and an aspect value.
WARNING:param.OverlayPlot212438: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.OverlayPlot212438: aspect value was ignored because absolute width and height values were provided. Either supply explicit frame_width and frame_height to achieve desired aspect OR supply a combination of width or height and an aspect value.
mixpods.map_hvplot(
    lambda ds, name, muted: ds.ds.cf["ocean_vertical_heat_diffusivity"]
    .mean("time")
    .load()
    .hvplot.line(label=name, muted=muted),
    tree,
).opts(
    ylim=(1e-8, 1e1),
    legend_position="right",
    logx=True,
    invert_axes=True,
    frame_width=300,
    aspect=1 / 3,
)
WARNING:param.OverlayPlot213366: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.OverlayPlot213366: aspect value was ignored because absolute width and height values were provided. Either supply explicit frame_width and frame_height to achieve desired aspect OR supply a combination of width or height and an aspect value.
WARNING:param.OverlayPlot213467: Due to internal constraints, when aspect and width/height is set, the bokeh backend uses those values as frame_width/frame_height instead. This ensures the aspect is respected, but means that the plot might be slightly larger than anticipated. Set the frame_width/frame_height explicitly to suppress this warning.
WARNING:param.OverlayPlot213467: aspect value was ignored because absolute width and height values were provided. Either supply explicit frame_width and frame_height to achieve desired aspect OR supply a combination of width or height and an aspect value.

Turbulence Variable distributions#

handles = [
    mixpods.plot_distributions(tree, "chi", bins=np.linspace(-11, -4, 101), log=True),
    mixpods.plot_distributions(tree, "eps", bins=np.linspace(-11, -4, 101), log=True),
    mixpods.plot_distributions(
        tree, "ocean_vertical_heat_diffusivity", bins=np.linspace(-8, -1, 101), log=True
    ),
    # plot_distributions(tree, "Jq", bins=np.linspace(-1000, 0, 51), log=False),
    mixpods.plot_distributions(tree, "Rig_T", np.linspace(-0.5, 1.5, 61))
    * hv.VLine(0.25).opts(line_color="k"),
]
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/computation.py:761: RuntimeWarning: invalid value encountered in log10
  result_data = func(*input_data)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/xarray/core/computation.py:761: RuntimeWarning: invalid value encountered in log10
  result_data = func(*input_data)
plot = (
    hv.Layout(handles)
    .opts("Overlay", frame_width=500)
    .cols(2)
    .opts(*mixpods.HV_TOOLS_OPTIONS)
)
select_legends(plot, figure_index=1, legend_position="right")
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/holoviews/plotting/bokeh/plot.py:967: UserWarning: found multiple competing values for 'toolbar.active_drag' property; using the latest value
  layout_plot = gridplot(

The daily timescale#

Daily composites (Moum et al, 2023)#

Hard to interpret! I think much of this reflects the bias between the KPP surface layer and the actively mixing layer in the observations. We can write better diagnostics to check this (Moum et al, 2023)

  1. The 89 m χpod comparison is quite interesting. It suggests we do need more background viscosity.

plot
%autoreload

plot = mixpods.plot_daily_composites(dailies, ["eps"], logy=True)
plot.opts(toolbar="left", show_legend=True)
%autoreload

mixpods.plot_daily_composites(
    dailies, ["S2", "N2", "Rig_T"], logy=False, legend=False
).opts(hv.opts.GridSpace(show_legend=True, width=200), hv.opts.Overlay(frame_width=200))

Daily composites: boundary layer depth#

mixing_layer_depth_criteria = {
    "boundary_layer_depth": {"name": "KPPhbl|KPP_OBLdepth|ePBL_h_ML"},
}

hbl = (
    tree.drop_nodes("TAO")
    .dc.subset_nodes("KPP_OBLdepth")
    .dc.concatenate_nodes()
    .reset_coords(drop=True)
).load()
(
    # hbl.groupby("time.hour").mean().hvplot.line(by="node", flip_yaxis=True)
    hbl.groupby("time.hour")
    .mean()
    .hvplot.line(by="node", flip_yaxis=True)
    .opts(show_legend=False)
    + hbl.to_dataframe().hvplot.hist(
        by="node",
        bins=np.arange(0, 90, 1),
        normed=1,
        alpha=0.3,
        ylim=(0, 0.05),
        muted_alpha=0,
    )
).opts(
    hv.opts.Curve(invert_yaxis=True),
    *mixpods.HV_TOOLS_OPTIONS,
    *mixpods.PRESENTATION_OPTS,
)

Seasonal timescale#

Seasonal mean profiles#

Heat Budget#

f, ax = plt.subplots(
    2,
    math.ceil(len(tree) / 2),
    constrained_layout=True,
    squeeze=False,
    sharex=True,
    sharey=True,
    figsize=(10, 3),
)

for axx, (name, node) in zip(ax.flat, tree.children.items()):
    mixpods.plot_climo_heat_budget_1d(node.ds, mxldepth=-40, penetration="mom", ax=axx)
    axx.set_title(name)

dcpy.plots.clean_axes(ax)

ENSO Timescale#

S2, N2 histograms#

Stability Diagram#

  1. ~Major change in La-Nina Warming for the new-baseline runs. We need to check ONI closely.~

    • fixed by just using the obs ONI everywhere.

      • This is probably OK since it shouldn’t drift too far away.

      • And we now average over the same time periods.

    • It is a pain to estimate this with branch runs,

    • but could be done if we spliced in the SST from the baseline case

  2. I still can’t reproduce the Warner and Moum (2019) figure where the El-Nino warming phase overlaps a lot less with La-Nina cooling

  • I’m only using data in the “deep cycle layer”.

    • have not matched vertical grid spacing yet.

    • could interpolate to TAO χpod depths

  • think about resampling to daily. Currently hourly.

  • think about instrumental error on vertical gradients for S2, N2;

    • if added to model, would it change the bottom tail.
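For reference, the ONI is the 3-month running mean of SST anomalies in the Niño 3.4 region (5°S-5°N, 170°-120°W). A minimal sketch of computing it and labeling phases from a monthly series (simplified: the ±0.5 °C thresholds are applied without NOAA’s five-consecutive-overlapping-seasons persistence requirement):

```python
import numpy as np
import pandas as pd


def oni_phases(nino34_sst, climatology=None):
    """ONI = 3-month running mean of the Nino 3.4 SST anomaly.
    Labels: 'warm' for ONI >= 0.5, 'cold' for ONI <= -0.5, else 'neutral'."""
    sst = pd.Series(nino34_sst)
    if climatology is None:
        # monthly climatology from the series itself
        climatology = sst.groupby(sst.index.month).transform("mean")
    oni = (sst - climatology).rolling(3, center=True).mean()
    phase = pd.cut(
        oni, [-np.inf, -0.5, 0.5, np.inf], labels=["cold", "neutral", "warm"]
    )
    return oni, phase


# synthetic monthly Nino 3.4 SST with an imposed warm event in 2001
idx = pd.date_range("2000-01", periods=48, freq="MS")
sst = pd.Series(26.5 + np.sin(np.arange(48) * 2 * np.pi / 12), index=idx)
sst.iloc[12:18] += 1.5
oni, phase = oni_phases(sst)
```

One could splice in SST from the baseline case before this step to handle the branch-run issue noted above.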

%autoreload

mixpods.plot_stability_diagram_by_dataset(tree, nrows=2)


ε-Ri histograms#

%autoreload

mixpods.map_hvplot(
    lambda dt, name, muted: mixpods.plot_eps_ri_hist(
        dt["eps_ri"].load(), label=name, muted=muted
    ),
    tree.children,
).opts(
    hv.opts.Curve(ylim=(1e-8, 3e-6)),
    hv.opts.GridSpace(show_legend=True),
    hv.opts.Overlay(show_grid=True, legend_position="bottom_left"),
    *mixpods.HV_TOOLS_OPTIONS,
)
/glade/u/home/dcherian/miniconda3/envs/pump/lib/python3.10/site-packages/holoviews/plotting/bokeh/plot.py:588: UserWarning: found multiple competing values for 'toolbar.active_drag' property; using the latest value
  plot = gridplot(plots[::-1],